1.
Cir Esp (Engl Ed) ; 2024 May 03.
Article in English | MEDLINE | ID: mdl-38704146

ABSTRACT

Artificial intelligence (AI) will power many of the tools in the armamentarium of digital surgeons. AI methods and surgical proofs of concept are flourishing, but we have yet to witness clinical translation and value. Here we exemplify the potential of AI in the care pathway of colorectal cancer patients and discuss the clinical, technical, and governance considerations of major importance for the safe translation of surgical AI for the benefit of our patients and practices.

2.
Article in English | MEDLINE | ID: mdl-38678488

ABSTRACT

PURPOSE: Advances in deep learning have resulted in effective models for surgical video analysis; however, these models often fail to generalize across medical centers due to domain shift caused by variations in surgical workflow, camera setups, and patient demographics. Recently, object-centric learning has emerged as a promising approach for improved surgical scene understanding, capturing and disentangling the visual and semantic properties of surgical tools and anatomy to improve downstream task performance. In this work, we conduct a multicentric performance benchmark of object-centric approaches, focusing on critical view of safety assessment in laparoscopic cholecystectomy, and then propose an improved approach for unseen domain generalization. METHODS: We evaluate four object-centric approaches for domain generalization, establishing baseline performance. Next, leveraging the disentangled nature of object-centric representations, we dissect one of these methods through a series of ablations (e.g., ignoring either visual or semantic features for downstream classification). Finally, based on the results of these ablations, we develop an optimized method specifically tailored for domain generalization, LG-DG, which includes a novel disentanglement loss function. RESULTS: Our optimized approach, LG-DG, achieves an improvement of 9.28% over the best baseline approach. More broadly, we show that object-centric approaches are highly effective for domain generalization thanks to their modular approach to representation learning. CONCLUSION: We investigate the use of object-centric methods for unseen domain generalization, identify method-agnostic factors critical for performance, and present an optimized approach that substantially outperforms existing methods.
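The abstract names a novel disentanglement loss but does not give its form, so the following is only a hypothetical sketch of such a penalty: it decorrelates the visual and semantic halves of an object-centric embedding. The function name and the loss form are assumptions, not LG-DG's actual objective.

```python
import torch

def disentanglement_loss(visual: torch.Tensor, semantic: torch.Tensor) -> torch.Tensor:
    """Hypothetical penalty: decorrelate visual and semantic feature blocks.

    visual, semantic: (batch, dim) embeddings from an object-centric encoder.
    """
    # Standardize each feature dimension across the batch.
    v = (visual - visual.mean(0)) / (visual.std(0) + 1e-6)
    s = (semantic - semantic.mean(0)) / (semantic.std(0) + 1e-6)
    # Cross-correlation matrix between the two blocks; push it toward zero
    # so that semantic content cannot leak into the visual features.
    cross_corr = (v.T @ s) / v.shape[0]
    return cross_corr.pow(2).mean()

# Usage sketch: total = task_loss + lambda_dis * disentanglement_loss(f_vis, f_sem)
```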

3.
Br J Surg ; 111(1), 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37935636

ABSTRACT

The growing availability of surgical digital data and developments in analytics such as artificial intelligence (AI) are being harnessed to improve surgical care. However, technical and cultural barriers to real-time intraoperative AI assistance exist. This early-stage clinical evaluation shows the technical feasibility of concurrently deploying several AI models in operating rooms for real-time assistance during procedures. In addition, potentially relevant clinical applications of these AI models are explored with a multidisciplinary cohort of key stakeholders.
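The software stack behind this concurrent deployment is not described in the abstract; as a rough illustration, fanning one endoscopic frame out to several independent models can be as simple as the sketch below, where every model name and output is a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict

import numpy as np

Frame = np.ndarray  # one endoscopic video frame, shape (H, W, 3)

def run_models(frame: Frame, models: Dict[str, Callable[[Frame], object]]) -> Dict[str, object]:
    """Run several models on the same frame concurrently, keyed by name."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {name: pool.submit(model, frame) for name, model in models.items()}
        return {name: fut.result() for name, fut in futures.items()}

# Stand-in models; real ones would be e.g. phase recognition, tool detection,
# and CVS assessment networks running on each incoming frame.
outputs = run_models(
    np.zeros((480, 640, 3), dtype=np.uint8),
    {
        "phase": lambda f: "calot_triangle_dissection",
        "tools": lambda f: ["grasper", "hook"],
        "cvs": lambda f: {"two_structures": False, "plate": False, "dissection": False},
    },
)
```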


Subject(s)
Cholecystectomy, Laparoscopic; Humans; Artificial Intelligence
4.
IEEE Trans Med Imaging ; 43(3): 1247-1258, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37971921

ABSTRACT

Assessing the critical view of safety (CVS) in laparoscopic cholecystectomy requires accurate identification and localization of key anatomical structures, reasoning about their geometric relationships to one another, and determining the quality of their exposure. Prior works have approached this task by including semantic segmentation as an intermediate step, using predicted segmentation masks to then predict the CVS. While these methods are effective, they rely on extremely expensive ground-truth segmentation annotations and tend to fail when the predicted segmentation is incorrect, limiting generalization. In this work, we propose a method for CVS prediction wherein we first represent a surgical image using a disentangled latent scene graph, then process this representation using a graph neural network. Our graph representations explicitly encode semantic information - object location, class information, geometric relations - to improve anatomy-driven reasoning, as well as visual features to retain differentiability and thereby provide robustness to semantic errors. Finally, to address annotation cost, we propose to train our method using only bounding box annotations, incorporating an auxiliary image reconstruction objective to learn fine-grained object boundaries. We show that our method not only outperforms several baseline methods when trained with bounding box annotations, but also scales effectively when trained with segmentation masks, maintaining state-of-the-art performance.
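As a rough sketch of the graph-processing stage, the snippet below runs one round of message passing over per-object node features and pools the result into logits for the three CVS criteria. Layer sizes, the readout, and the single-step design are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GraphCVSHead(nn.Module):
    """One message-passing step over a scene graph, then CVS criteria logits."""

    def __init__(self, node_dim: int = 64, hidden: int = 128, n_criteria: int = 3):
        super().__init__()
        self.msg = nn.Linear(2 * node_dim, node_dim)  # message from (sender, receiver)
        self.readout = nn.Sequential(
            nn.Linear(node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_criteria),            # one logit per CVS criterion
        )

    def forward(self, nodes: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # nodes: (N, node_dim) per-object features (visual + semantic);
        # edges: (E, 2) directed index pairs (sender, receiver).
        send, recv = edges[:, 0], edges[:, 1]
        messages = torch.relu(self.msg(torch.cat([nodes[send], nodes[recv]], dim=-1)))
        agg = torch.zeros_like(nodes).index_add_(0, recv, messages)  # sum per receiver
        graph_embedding = (nodes + agg).mean(dim=0)                  # mean readout
        return self.readout(graph_embedding)

# Example: 4 detected structures, fully connected without self-loops.
nodes = torch.randn(4, 64)
edges = torch.tensor([[i, j] for i in range(4) for j in range(4) if i != j])
logits = GraphCVSHead()(nodes, edges)
```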


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Semantics
5.
Med Image Anal ; 89: 102888, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37451133

ABSTRACT

Formalizing surgical activities as triplets of the instruments used, actions performed, and target anatomies is becoming a gold-standard approach for surgical activity modeling. The benefit is that this formalization helps to obtain a more detailed understanding of tool-tissue interaction, which can be used to develop better artificial intelligence assistance for image-guided surgery. Earlier efforts and the CholecTriplet challenge introduced in 2021 have put together techniques aimed at recognizing these triplets from surgical footage. Also estimating the spatial locations of the triplets would offer more precise intraoperative context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly supervised bounding box localization of every visible surgical instrument (or tool), as the key actor, and the modeling of each tool activity in the form of a triplet. The paper describes a baseline method and 10 new deep learning algorithms presented at the challenge to solve the task. It also provides thorough methodological comparisons of the methods and an in-depth analysis of the results across multiple metrics and across visual and procedural challenges, discussing their significance and useful insights for future research directions and applications in surgery.
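For readers unfamiliar with the detection formulation, a spatially grounded triplet pairs the instrument, verb, and target labels with the instrument's bounding box, and a detection only counts as correct when both agree. The sketch below illustrates that matching rule; the challenge's actual evaluation protocol is more involved.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TripletDetection:
    box: Tuple[float, float, float, float]  # instrument box (x1, y1, x2, y2)
    instrument: str                         # e.g. "grasper"
    verb: str                               # e.g. "retract"
    target: str                             # e.g. "gallbladder"
    score: float                            # detection confidence

def iou(a, b) -> float:
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def matches(det: TripletDetection, gt: TripletDetection, thr: float = 0.5) -> bool:
    """Count a detection only if the triplet labels and the box both agree."""
    same = (det.instrument, det.verb, det.target) == (gt.instrument, gt.verb, gt.target)
    return same and iou(det.box, gt.box) >= thr
```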


Subject(s)
Artificial Intelligence; Surgery, Computer-Assisted; Humans; Endoscopy; Algorithms; Surgery, Computer-Assisted/methods; Surgical Instruments
6.
Med Image Anal ; 88: 102844, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37270898

ABSTRACT

The field of surgical computer vision has undergone considerable breakthroughs in recent years with the rising popularity of deep neural network-based methods. However, standard fully supervised approaches for training such models require vast amounts of annotated data, imposing a prohibitively high cost, especially in the clinical domain. Self-supervised learning (SSL) methods, which have begun to gain traction in the general computer vision community, represent a potential solution to these annotation costs, making it possible to learn useful representations from unlabeled data alone. Still, the effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and largely unexplored. In this work, we address this critical need by investigating four state-of-the-art SSL methods (MoCo v2, SimCLR, DINO, SwAV) in the context of surgical computer vision. We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding: phase recognition and tool presence detection. We examine their parameterization, then their behavior with respect to training data quantities in semi-supervised settings. Correct transfer of these methods to surgery, as described and conducted in this work, leads to substantial performance gains over generic uses of SSL - up to 7.4% on phase recognition and 20% on tool presence detection - and outperforms state-of-the-art semi-supervised phase recognition approaches by up to 14%. Further results obtained on a highly diverse selection of surgical datasets exhibit strong generalization properties. The code is available at https://github.com/CAMMA-public/SelfSupSurg.
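Of the four studied methods, SimCLR has the most compact objective; its standard NT-Xent contrastive loss is reproduced below in minimal form. The surgery-specific parameterization the paper actually investigates (augmentations, schedules, data regimes) lives in the linked repository, not in this sketch.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent loss over two augmented views z1, z2 of shape (batch, dim)."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, dim) unit vectors
    sim = z @ z.T / temperature                          # scaled cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))           # exclude self-similarity
    # The positive for row i is the other view of the same frame.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage on unlabeled laparoscopic frames: z1, z2 = proj(enc(aug(x))), proj(enc(aug(x)))
```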


Subject(s)
Computers; Neural Networks, Computer; Humans; Supervised Machine Learning
7.
Med Image Anal ; 86: 102803, 2023 May.
Article in English | MEDLINE | ID: mdl-37004378

ABSTRACT

Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps, or events, leaving out the fine-grained interaction details of the surgical activity; yet these are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of instrument, verb, and target delivers more comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and the assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from the competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison and an in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved and highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
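The abstract announces an ensemble over the submitted models without detailing it, so the snippet below shows only the simplest possible stand-in, unweighted averaging of per-frame triplet probabilities, not the paper's actual method.

```python
from typing import List

import numpy as np

def ensemble_triplet_scores(per_model_probs: List[np.ndarray]) -> np.ndarray:
    """Average per-frame triplet scores across models.

    per_model_probs: one (n_frames, n_triplet_classes) array of sigmoid
    scores per model, all over the same frames and class vocabulary.
    """
    stacked = np.stack(per_model_probs)  # (n_models, n_frames, n_classes)
    return stacked.mean(axis=0)          # plain unweighted average

# Example with two stand-in models over 3 frames and 100 triplet classes:
rng = np.random.default_rng(0)
fused = ensemble_triplet_scores([rng.random((3, 100)), rng.random((3, 100))])
```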


Subject(s)
Benchmarking; Laparoscopy; Humans; Algorithms; Operating Rooms; Workflow; Deep Learning
8.
IEEE Trans Med Imaging ; 42(7): 1920-1931, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36374877

ABSTRACT

Recent advancements in deep learning methods bring computer assistance a step closer to fulfilling the promise of safer surgical procedures. However, the generalizability of such methods often depends on training with diverse datasets from multiple medical institutions, which is a restrictive requirement given the sensitive nature of medical data. Recently proposed collaborative learning methods such as Federated Learning (FL) allow for training on remote datasets without the need to explicitly share data. Even so, data annotation still represents a bottleneck, particularly in medicine and surgery, where clinical expertise is often required. With these constraints in mind, we propose FedCy, a federated semi-supervised learning (FSSL) method that combines FL and self-supervised learning to exploit a decentralized dataset of both labeled and unlabeled videos, thereby improving performance on the task of surgical phase recognition. By leveraging temporal patterns in the labeled data, FedCy helps guide unsupervised training on unlabeled data towards learning task-specific features for phase recognition. We demonstrate significant performance gains over state-of-the-art FSSL methods on the task of automatic recognition of surgical phases using a newly collected multi-institutional dataset of laparoscopic cholecystectomy videos. Furthermore, we demonstrate that our approach also learns more generalizable features when tested on data from an unseen domain.
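FedCy's specific contribution, guiding self-supervised training on unlabeled clients with temporal cues from the labeled data, is built on top of a standard federated aggregation step; a generic federated averaging round (not FedCy itself) looks like this:

```python
from typing import Dict, List

import torch

def federated_average(
    client_states: List[Dict[str, torch.Tensor]],
    client_sizes: List[int],
) -> Dict[str, torch.Tensor]:
    """Aggregate client model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    return {
        key: sum(w * state[key] for w, state in zip(weights, client_states))
        for key in client_states[0]
    }

# Each round: hospitals train locally on their own videos, send state_dicts,
# and receive the averaged model back; raw data never leaves the institution.
```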


Subject(s)
Supervised Machine Learning; Surgical Procedures, Operative; Video Recording
9.
NPJ Digit Med ; 5(1): 163, 2022 Oct 28.
Article in English | MEDLINE | ID: mdl-36307544

ABSTRACT

Hundreds of millions of operations are performed worldwide each year, and the rising uptake in minimally invasive surgery has enabled fiber optic cameras and robots to become both important tools to conduct surgery and sensors from which to capture information about surgery. Computer vision (CV), the application of algorithms to analyze and interpret visual data, has become a critical technology through which to study the intraoperative phase of care with the goals of augmenting surgeons' decision-making processes, supporting safer surgery, and expanding access to surgical care. While much work has been performed on potential use cases, there are currently no CV tools widely used for diagnostic or therapeutic applications in surgery. Using laparoscopic cholecystectomy as an example, we reviewed current CV techniques that have been applied to minimally invasive surgery and their clinical applications. Finally, we discuss the challenges and obstacles that remain to be overcome for broader implementation and adoption of CV in surgery.

11.
Surg Endosc ; 36(11): 8379-8386, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35171336

ABSTRACT

BACKGROUND: A computer vision (CV) platform named EndoDigest was recently developed to facilitate the use of surgical videos. Specifically, EndoDigest automatically provides short video clips to effectively document the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). The aim of the present study is to validate EndoDigest on a multicentric dataset of LC videos. METHODS: LC videos from 4 centers were manually annotated with the time of the cystic duct division and an assessment of CVS criteria. Incomplete recordings, bailout procedures, and procedures with an intraoperative cholangiogram were excluded. EndoDigest leveraged predictions of deep learning models for workflow analysis in a rule-based inference system designed to estimate the time of the cystic duct division. Performance was assessed by computing the error in estimating the manually annotated time of the cystic duct division. To provide concise video documentation of CVS, EndoDigest extracted video clips showing the 2 min preceding and the 30 s following the predicted cystic duct division. The relevance of the documentation was evaluated by assessing CVS in the automatically extracted 2.5-min-long video clips. RESULTS: 144 of the 174 LC videos from the 4 centers were analyzed. EndoDigest located the time of the cystic duct division with a mean error of 124.0 ± 270.6 s despite the use of fluorescent cholangiography in 27 procedures and great variation in surgical workflows across centers. The surgical evaluation found that 108 (75.0%) of the automatically extracted short video clips documented CVS effectively. CONCLUSIONS: EndoDigest was robust enough to reliably locate the time of the cystic duct division and efficiently provide video documentation of CVS despite the highly variable workflows. Training specifically on data from each center could improve results; however, this multicentric validation shows the potential for clinical translation of this surgical data science tool to efficiently document surgical safety.
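The clip-extraction arithmetic described above (the 2 min preceding plus the 30 s following the predicted division) is simple to state in code; the division-time rule below is a single illustrative placeholder, not EndoDigest's actual multi-model inference system.

```python
from typing import List, Tuple

def estimate_division_time(phase_per_sec: List[str]) -> int:
    """Placeholder rule: last second predicted as the clipping/cutting phase."""
    candidates = [t for t, p in enumerate(phase_per_sec) if p == "clipping_cutting"]
    if not candidates:
        raise ValueError("no clipping/cutting phase predicted")
    return candidates[-1]

def cvs_clip_bounds(division_t: int, video_len: int) -> Tuple[int, int]:
    """(start, end) in seconds of the 2.5-min CVS documentation clip."""
    return max(0, division_t - 120), min(video_len, division_t + 30)

# Toy example: a 10-minute video with clipping predicted around minute 6.
phases = ["dissection"] * 330 + ["clipping_cutting"] * 60 + ["other"] * 210
start, end = cvs_clip_bounds(estimate_division_time(phases), len(phases))
```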


Subject(s)
Cholecystectomy, Laparoscopic; Humans; Cholecystectomy, Laparoscopic/methods; Video Recording; Cholangiography; Documentation; Computers
12.
Ann Surg ; 275(5): 955-961, 2022 May 01.
Article in English | MEDLINE | ID: mdl-33201104

ABSTRACT

OBJECTIVE: To develop a deep learning model to automatically segment hepatocystic anatomy and assess the criteria defining the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). BACKGROUND: Poor implementation and subjective interpretation of CVS contribute to the stable rates of bile duct injuries in LC. As CVS is assessed visually, this task can be automated by using computer vision, an area of artificial intelligence aimed at interpreting images. METHODS: Still images from LC videos were annotated with CVS criteria and hepatocystic anatomy segmentation. A deep neural network comprising a segmentation model to highlight hepatocystic anatomy and a classification model to predict CVS criteria achievement was trained and tested using 5-fold cross-validation. Intersection over union, average precision, and balanced accuracy were computed to evaluate the model performance versus the annotated ground truth. RESULTS: A total of 2854 images from 201 LC videos were annotated and 402 images were further segmented. Mean intersection over union for segmentation was 66.6%. The model assessed the achievement of CVS criteria with a mean average precision and balanced accuracy of 71.9% and 71.4%, respectively. CONCLUSIONS: Deep learning algorithms can be trained to reliably segment hepatocystic anatomy and assess CVS criteria in still laparoscopic images. Surgical-technical partnerships should be encouraged to develop and evaluate deep learning models to improve surgical safety.
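The two headline metrics are standard; written out for binary masks and labels, they are as below (the paper reports their means over anatomical classes, CVS criteria, and cross-validation folds).

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union for boolean segmentation masks of equal shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union > 0 else 1.0  # empty vs empty: perfect

def balanced_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean of sensitivity and specificity for one binary CVS criterion."""
    tp = np.sum((pred == 1) & (gt == 1))
    tn = np.sum((pred == 0) & (gt == 0))
    sens = tp / max(1, np.sum(gt == 1))
    spec = tn / max(1, np.sum(gt == 0))
    return float((sens + spec) / 2)
```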


Subject(s)
Bile Duct Diseases; Cholecystectomy, Laparoscopic; Deep Learning; Artificial Intelligence; Cholecystectomy, Laparoscopic/methods; Humans; Video Recording
13.
Ann Surg ; 274(1): e93-e95, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-33417329

ABSTRACT

OBJECTIVE: The aim of this study was to develop a computer vision platform to automatically locate critical events in surgical videos and provide short video clips documenting the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). BACKGROUND: Intraoperative events are typically documented through operator-dictated reports that do not always reflect the operative reality. Surgical videos provide complete information on surgical procedures, but the burden associated with storing and manually analyzing full-length videos has so far limited their effective use. METHODS: A computer vision platform named EndoDigest was developed and used to analyze LC videos. The mean absolute error (MAE) of the platform in automatically locating the manually annotated time of the cystic duct division in full-length videos was assessed. The relevance of the automatically extracted short video clips was evaluated by calculating the percentage of video clips in which the CVS was assessable by surgeons. RESULTS: A total of 155 LC videos were analyzed: 55 of these videos were used to develop EndoDigest, whereas the remaining 100 were used to test it. The time of the cystic duct division was automatically located with a MAE of 62.8 ± 130.4 seconds (1.95% of full-length video duration). CVS was assessable in 91% of the 2.5-minute-long video clips automatically extracted from the considered test procedures. CONCLUSIONS: Deep learning models for workflow analysis can be used to reliably locate critical events in surgical videos and document CVS in LC. Further studies are needed to assess the clinical impact of surgical data science solutions for safer laparoscopic cholecystectomy.
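The two evaluation figures reported here reduce to a few lines; this reading of "percentage of full-length video duration" (per-video normalization before averaging) is one plausible interpretation rather than the paper's documented formula.

```python
import numpy as np

def localization_error(pred_t: np.ndarray, true_t: np.ndarray, durations: np.ndarray):
    """MAE of the predicted division time, in seconds and as % of video duration."""
    abs_err = np.abs(pred_t - true_t)
    return float(abs_err.mean()), float((abs_err / durations).mean() * 100)

def assessability_rate(clip_assessable: np.ndarray) -> float:
    """Percentage of extracted clips in which surgeons could assess the CVS."""
    return float(clip_assessable.mean() * 100)
```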


Subject(s)
Cholecystectomy, Laparoscopic/standards; Documentation/methods; Image Processing, Computer-Assisted/methods; Patient Safety/standards; Quality Improvement; Video Recording; Algorithms; Clinical Competence; Deep Learning; Humans; Workflow